To track the 3D locations and trajectories of the other traffic participants at any given time, modern autonomous vehicles are equipped with multiple cameras that cover the vehicle's full surroundings. Yet, camera-based 3D object tracking methods prioritize optimizing the single-camera setup and resort to post-hoc fusion in a multi-camera setup. In this paper, we propose a method for panoramic 3D object tracking, called CC-3DT, that associates and models object trajectories both temporally and across views, and improves the overall tracking consistency. In particular, our method fuses 3D detections from multiple cameras before association, reducing identity switches significantly and improving motion modeling. Our experiments on large-scale driving datasets show that fusion before association leads to a large margin of improvement over post-hoc fusion. We set a new state-of-the-art with 12.6% improvement in average multi-object tracking accuracy (AMOTA) among all camera-based methods on the competitive NuScenes 3D tracking benchmark, outperforming previously published methods by 6.5% in AMOTA with the same 3D detector.
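The fusion-before-association idea can be sketched minimally as follows, assuming each camera's 3D detections have already been transformed into a common vehicle frame; the function name and the simple radius-based merge are illustrative stand-ins, not CC-3DT's actual pipeline:

```python
import numpy as np

def fuse_cross_camera(detections, radius=1.0):
    """Merge 3D detections from all cameras before tracker association.

    detections: list of (center_xyz: np.ndarray of shape (3,), score: float)
    Detections whose centers lie within `radius` in the common vehicle frame
    are treated as the same physical object; the highest-scoring one is kept.
    """
    # Process in descending score order so the best detection wins each merge.
    dets = sorted(detections, key=lambda d: -d[1])
    fused = []
    for center, score in dets:
        if all(np.linalg.norm(center - c) > radius for c, _ in fused):
            fused.append((center, score))
    return fused

# Two cameras see the same car near (10, 2, 0); a third object is distinct.
cam_a = [(np.array([10.0, 2.0, 0.0]), 0.9)]
cam_b = [(np.array([10.2, 2.1, 0.0]), 0.7), (np.array([30.0, -4.0, 0.0]), 0.8)]
fused = fuse_cross_camera(cam_a + cam_b)
print(len(fused))  # 2 fused detections instead of 3 raw ones
```

Merging duplicates before association means the tracker sees one candidate per physical object, which is what reduces cross-camera identity switches.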
In this paper, we address the problem of active robotic 3D reconstruction of objects. In particular, we study how a mobile robot with an arm-mounted camera can select a favorable number of views to efficiently recover an object's 3D shape. In contrast to existing solutions to this problem, we leverage the popular neural radiance field representation of objects, which has recently shown impressive results on a variety of computer vision tasks. However, it is not straightforward to directly reason about an object's explicit 3D geometric details with such a representation, which makes the next-best-view selection problem for dense 3D reconstruction challenging. This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution along each ray of the object's implicit neural representation. We show that it is possible to infer the uncertainty of the underlying 3D geometry given novel views with the proposed estimator. We then present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance field representations. Encouraging experimental results on synthetic and real-world data suggest that the approach presented in this paper can enable a new research direction of using an implicit 3D object representation for the next-best-view problem in robot vision applications, distinguishing our approach from existing ones that rely on explicit 3D geometric modeling.
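The proposed uncertainty measure can be sketched directly from the standard volume-rendering weights (a minimal NumPy illustration assuming uniform ray sampling; the function names are mine, not the paper's):

```python
import numpy as np

def ray_weights(sigma, delta):
    """Volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the transmittance accumulated before sample i."""
    alpha = 1.0 - np.exp(-sigma * delta)
    T = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
    return T * alpha

def ray_entropy(sigma, delta, eps=1e-10):
    """Entropy of the normalized weight distribution along a single ray."""
    w = ray_weights(sigma, delta)
    p = w / (w.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

delta = np.full(64, 0.05)                  # uniform sample spacing
peaked = np.zeros(64); peaked[32] = 50.0   # density concentrated at one surface
diffuse = np.full(64, 0.5)                 # density smeared along the ray
print(ray_entropy(peaked, delta), ray_entropy(diffuse, delta))
```

A ray with a confidently localized surface yields a peaked weight distribution and low entropy, while an uncertain ray spreads its weights and scores high, which is what makes the entropy usable as a view-selection signal.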
This paper advocates the use of organic priors in classical non-rigid structure from motion (NRSfM). By organic priors, we mean invaluable intermediate prior information intrinsic to the NRSfM matrix factorization theory. It is shown that such priors reside in the factorized matrices and that, surprisingly, existing methods generally ignore them. The paper's main contribution is a simple, methodical, and practical method that can effectively exploit these organic priors to solve NRSfM. The proposed method makes no assumptions other than the popular low-rank shape assumption and provides a reliable solution to NRSfM. Our work shows that the accessibility of organic priors is independent of the camera motion and the type of shape deformation. Beyond that, the paper provides insights into the NRSfM factorization, in terms of both shape and motion, and is the first approach to show the benefit of single rotation averaging for NRSfM. Furthermore, we outline how to effectively recover motion and non-rigid 3D shape using the proposed organic-priors-based approach and demonstrate results that outperform prior-free NRSfM performance by a significant margin. Finally, we present the benefits of our method via extensive experiments and evaluations on several benchmark datasets.
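For context, the factorization at the heart of NRSfM can be sketched as follows: a measurement matrix of stacked 2D point tracks is decomposed via SVD into rank-3K motion and shape factors. This toy example only verifies the decomposition; the corrective transform and the organic priors the paper extracts from these intermediate factors are beyond its scope:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P, K = 10, 40, 2          # frames, points, shape-basis rank
# Synthetic rank-3K measurement matrix, standing in for stacked 2D tracks.
W = rng.standard_normal((2 * F, 3 * K)) @ rng.standard_normal((3 * K, P))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 3 * K
M = U[:, :r] * s[:r]         # motion factor (2F x 3K), up to a corrective G
S = Vt[:r]                   # shape factor (3K x P)

# The intermediate factors M and S reproduce the measurements exactly.
print(np.allclose(M @ S, W))  # True
```

The factorization is only defined up to an invertible 3K x 3K corrective matrix G (replacing M, S with MG, G^{-1}S), which is where the additional priors come into play.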
Many hand-held or mixed-reality devices are used with a single sensor for 3D reconstruction, although they often comprise multiple sensors. Multi-sensor depth fusion can substantially improve the robustness and accuracy of 3D reconstruction methods, but existing techniques are not robust enough to handle sensors that operate with diverse value ranges as well as noise and outlier statistics. To this end, we introduce SenFuNet, a depth fusion approach that learns sensor-specific noise and outlier statistics and combines the data streams of depth frames in an online fashion. Our method fuses multi-sensor depth streams regardless of time synchronization and calibration, and generalizes well with little training data. We conduct experiments with various sensor combinations on real-world data and the Scene3D dataset, as well as the Replica dataset. Experiments show that our fusion strategy outperforms traditional and recent online depth fusion approaches. In addition, the combination of multiple sensors yields more robust outlier handling and more precise surface reconstruction than the use of a single sensor. The source code and data are available at https://github.com/tfy14esa/senfunet.
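The effect of sensor-specific confidences can be illustrated with a hand-rolled per-pixel weighted fusion. In SenFuNet the weights are predicted by the network from the learned noise and outlier statistics; here they are fixed constants, and the function is an illustrative sketch rather than the paper's method:

```python
import numpy as np

def fuse_depths(depths, confidences, eps=1e-8):
    """Per-pixel confidence-weighted fusion of depth maps from several sensors.

    depths:      (n_sensors, H, W) depth maps; NaN marks missing measurements.
    confidences: (n_sensors, H, W) non-negative per-pixel weights.
    """
    valid = ~np.isnan(depths)
    w = np.where(valid, confidences, 0.0)   # missing pixels get zero weight
    d = np.where(valid, depths, 0.0)
    return (w * d).sum(axis=0) / np.maximum(w.sum(axis=0), eps)

# A sensor trusted more (weight 0.75) and one trusted less (weight 0.25);
# the first sensor has no measurement at the second pixel.
d1 = np.array([[1.0, np.nan]])
d2 = np.array([[1.2, 3.0]])
conf = np.stack([np.full((1, 2), 0.75), np.full((1, 2), 0.25)])
fused = fuse_depths(np.stack([d1, d2]), conf)
print(fused)  # [[1.05 3.  ]]
```

Where both sensors observe a pixel the result is a weighted blend; where only one does, its measurement passes through unchanged, which is the behavior that makes per-sensor weighting robust to dropouts.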
This paper presents a real-time online vision framework to jointly recover an indoor scene's 3D structure and semantic labels. Given noisy depth maps, a camera trajectory, and 2D semantic labels at train time, the proposed deep neural network-based approach learns to fuse the frames with suitable semantic labels in the scene space. Our approach exploits a joint volumetric representation of the depth and semantics in the scene feature space to solve this task. For a compelling online fusion of the semantic labels and geometry in real time, we introduce an efficient vortex pooling block while removing the routing network in online depth fusion to preserve high-frequency surface details. We show that the context information provided by the semantics of the scene helps the depth fusion network learn noise-resistant features. Not only that, it helps overcome the shortcomings of the current online depth fusion methods in handling thin object structures, thickening artifacts, and false surfaces. Experimental evaluation on the Replica dataset shows that our approach can perform depth fusion at 37 and 10 frames per second with an average reconstruction F-score of 88% and 91%, respectively, depending on the depth map resolution. In addition, our model shows an average IoU score of 0.515 on the ScanNet 3D semantic benchmark leaderboard.
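A toy illustration of a joint volumetric representation of depth and semantics follows. The class name and the fixed running-average fusion rule are illustrative only; the paper's network learns the fusion rather than applying hand-set update equations:

```python
import numpy as np

class SemanticVolume:
    """Minimal per-voxel fusion of geometry (a TSDF) and semantic evidence."""

    def __init__(self, shape, n_classes):
        self.tsdf = np.zeros(shape)               # signed-distance estimate
        self.weight = np.zeros(shape)             # accumulated observation weight
        self.logits = np.zeros(shape + (n_classes,))  # per-class evidence

    def integrate(self, tsdf_obs, logit_obs, w=1.0):
        # Running weighted average for geometry, additive evidence for semantics.
        total = self.weight + w
        self.tsdf = (self.tsdf * self.weight + tsdf_obs * w) / total
        self.weight = total
        self.logits += logit_obs * w

vol = SemanticVolume((2, 2, 2), n_classes=3)
obs = np.full((2, 2, 2), 0.4)
sem = np.zeros((2, 2, 2, 3)); sem[..., 1] = 1.0   # evidence for class 1
vol.integrate(obs, sem)
vol.integrate(np.full((2, 2, 2), 0.6), sem)
print(vol.tsdf[0, 0, 0])             # 0.5 (average of 0.4 and 0.6)
print(vol.logits[0, 0, 0].argmax())  # 1
```

Keeping geometry and semantics in one volume is what lets semantic context regularize the depth estimates, as the paper argues.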
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the usage of such a robot in a domestic setting is still very much a subject of research. This paper discusses the design and virtual simulation of such a robot, capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expressions on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish the framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio responses. Emotion detection from speech did not perform as well as ERANNs or Zeta Policy learning, but still managed an accuracy of 63.5%. The video emotion detection system produced results that are almost on par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm learned extremely rapidly, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to the generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
Searching long egocentric videos with natural language queries (NLQ) has compelling applications in augmented reality and robotics, where a fluid index into everything that a person (agent) has seen before could augment human memory and surface relevant information on demand. However, the structured nature of the learning problem (free-form text query inputs, localized video temporal window outputs) and its needle-in-a-haystack nature makes it both technically challenging and expensive to supervise. We introduce Narrations-as-Queries (NaQ), a data augmentation strategy that transforms standard video-text narrations into training data for a video query localization model. Validating our idea on the Ego4D benchmark, we find it has tremendous impact in practice. NaQ improves multiple top models by substantial margins (even doubling their accuracy), and yields the very best results to date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in the CVPR and ECCV 2022 competitions and topping the current public leaderboard. Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique properties of our approach such as gains on long-tail object queries, and the ability to perform zero-shot and few-shot NLQ.
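The core data transformation behind narrations-as-queries can be sketched roughly as follows; the query template, narration-tag stripping, and fixed-width temporal window are illustrative guesses at the recipe, not NaQ's exact implementation:

```python
def narration_to_query(narration, timestamp, window=2.0):
    """Turn a timestamped video-text narration into a (query, temporal window)
    training pair for an NLQ localization model.

    narration: Ego4D-style narration string, e.g. "#C C open the fridge"
               ("#C C" tags the camera wearer).
    timestamp: time in seconds at which the narration was written.
    """
    text = narration.replace("#C C ", "").strip()   # drop camera-wearer tag
    query = f"When did I {text}?"                   # narration -> question form
    start = max(0.0, timestamp - window / 2)        # center window on timestamp
    end = timestamp + window / 2
    return query, (start, end)

q, (s, e) = narration_to_query("#C C open the fridge", 12.4)
print(q, s, e)
```

Because narrations are cheap and plentiful compared with annotated NLQ examples, this transformation multiplies the available supervision by orders of magnitude, which is the source of the reported gains.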
Machine Translation (MT) systems generally aim at the automatic rendering of a source language into a target language, retaining the original context, using various Natural Language Processing (NLP) techniques. Among these methods is Statistical Machine Translation (SMT), which uses probabilistic and statistical techniques to analyze and convert text. This paper describes the development of bilingual SMT models for translating English to fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all 15 languages are introduced with a short description related to our experimental need. Further, a detailed analysis of the Samanantar and OPUS datasets for model building, along with the standard benchmark dataset (Flores-200) for fine-tuning and testing, is done as part of our experiment. Different preprocessing approaches are proposed in this paper to handle the noise in the dataset. To create the system, the MOSES open-source SMT toolkit is explored. Distance reordering is utilized with the aim of capturing the rules of grammar and context-dependent adjustments through a phrase reordering categorization framework. In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
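Distance-based reordering, as used in Moses-style phrase-based decoders, can be illustrated with a toy scoring function; the decay constant `alpha` and the span encoding are illustrative, not Moses's configuration:

```python
def distortion_penalty(phrase_spans, alpha=0.9):
    """Distance-based reordering score for a phrase-based translation.

    phrase_spans: source-side (start, end) index spans of consecutive target
                  phrases. Each jump between spans is penalized by
                  alpha ** distance, with distance = |start_i - end_{i-1} - 1|,
                  so monotone translations score 1.0 and long-range
                  reorderings are discounted.
    """
    score = 1.0
    prev_end = -1                       # end of the previously covered span
    for start, end in phrase_spans:
        score *= alpha ** abs(start - prev_end - 1)
        prev_end = end
    return score

monotone = [(0, 1), (2, 3), (4, 5)]     # phrases translated in source order
reordered = [(2, 3), (0, 1), (4, 5)]    # translation begins mid-sentence
print(distortion_penalty(monotone))     # 1.0 (no jumps)
print(distortion_penalty(reordered))    # < 1.0
```

A penalty of this shape biases the decoder toward mostly monotone output while still permitting the local reorderings that grammar demands, which is the role distance reordering plays in the experiments above.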
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew in Benin and characterized the expansion of cashew plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density versus low-density cashew plantations by automatic feature extraction and optimized clustering. Results show that the STCA model has an overall accuracy of 80% and the CASTC model achieved an overall accuracy of 77.9%. We found that the cashew area in Benin has doubled from 2015 to 2021 with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas has increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in the heterogeneous smallholder landscape.
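The clustering step that separates high-density from low-density plantations can be sketched with a tiny k-means (k=2); the real CASTC model clusters learned temporal features with an optimized clustering objective, whereas the one-dimensional "canopy density" values below are toy stand-ins:

```python
import numpy as np

def two_means(features, iters=20, seed=0):
    """Minimal k-means with k=2: alternate between assigning each sample to
    its nearest center and recomputing centers as cluster means."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), 2, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centers, axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([features[labels == k].mean(axis=0) for k in (0, 1)])
    return labels, centers

# Toy per-plantation features: two well-separated density groups.
feats = np.array([[0.1], [0.15], [0.12], [0.8], [0.85], [0.9]])
labels, centers = two_means(feats)
print(labels)  # first three samples share one label, last three the other
```

With well-separated groups like these, the two recovered clusters correspond directly to the low- and high-density classes the study reports.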